
 reward perturbation


We will add a series of numerical experiments to demonstrate the minimax optimality of the model-

Neural Information Processing Systems

We thank all reviewers for their very helpful comments. This letter addresses several major questions raised by the reviewers. Indeed, reward perturbation is introduced merely to facilitate analysis; take, for example, Section 4.3 of the arXiv version. We will elucidate the motivation and intuition of reward perturbation earlier on in the revised paper. We understand from the reviewer's comment that there might be confusion in our presentation; this will be made clear in the final paper.


Your Policy Regularizer is Secretly an Adversary

Brekelmans, Rob, Genewein, Tim, Grau-Moya, Jordi, Delétang, Grégoire, Kunesch, Markus, Legg, Shane, Ortega, Pedro

arXiv.org Machine Learning

Policy regularization methods such as maximum entropy regularization are widely used in reinforcement learning to improve the robustness of a learned policy. In this paper, we show how this robustness arises from hedging against worst-case perturbations of the reward function, which are chosen from a limited set by an imagined adversary. Using convex duality, we characterize this robust set of adversarial reward perturbations under KL and alpha-divergence regularization, which includes Shannon and Tsallis entropy regularization as special cases. Importantly, generalization guarantees can be given within this robust set. We provide detailed discussion of the worst-case reward perturbations, and present intuitive empirical examples to illustrate this robustness and its relationship with generalization. Finally, we discuss how our analysis complements and extends previous results on adversarial reward robustness and path consistency optimality conditions.
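As a concrete illustration of the duality described in the abstract, the sketch below works through a one-step (bandit) problem with KL regularization toward a reference policy. This is not the authors' code; the inverse regularization strength beta, the reward vector r, and the uniform reference policy pi_0 are illustrative assumptions. It checks the closed-form worst-case perturbation for the KL case, under which the perturbed reward of every action collapses to the soft value, leaving the regularized policy indifferent (the path-consistency condition the abstract alludes to).

```python
import numpy as np

# Illustrative one-step setup (assumed values, not from the paper).
beta = 2.0                               # inverse regularization strength
r = np.array([1.0, 0.5, -0.3])           # original rewards
pi_0 = np.ones_like(r) / len(r)          # uniform reference policy

# KL-regularized optimal policy: pi*(a) proportional to pi_0(a) exp(beta r(a)).
logits = np.log(pi_0) + beta * r
pi_star = np.exp(logits - np.max(logits))
pi_star /= pi_star.sum()

# Soft value: V = (1/beta) log sum_a pi_0(a) exp(beta r(a)).
V = np.log(np.sum(pi_0 * np.exp(beta * r))) / beta

# Worst-case adversarial perturbation in closed form for KL regularization:
# delta_r*(a) = -(1/beta) log(pi*(a) / pi_0(a)).
delta_r = -np.log(pi_star / pi_0) / beta

# Under the perturbed reward r + delta_r*, every action yields the soft
# value V, so the regularized policy is indifferent among actions.
print(np.allclose(r + delta_r, V))       # True
```

The indifference check makes the adversarial interpretation tangible: the imagined adversary spends its limited perturbation budget exactly where the agent's rewards are highest, flattening them to the soft value.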